
    Solving Differential Equations in R: Package deSolve

    In this paper we present the R package deSolve to solve initial value problems (IVP) written as ordinary differential equations (ODE), differential algebraic equations (DAE) of index 0 or 1, and partial differential equations (PDE), the latter solved using the method of lines approach. The differential equations can be represented in R code or as compiled code. In the latter case, R is used as a tool to trigger the integration and post-process the results, which facilitates model development and application, whilst the compiled code significantly increases simulation speed. The methods implemented are efficient, robust, and well documented public-domain Fortran routines. They include four integrators from the ODEPACK package (LSODE, LSODES, LSODA, LSODAR), DVODE and DASPK2.0. In addition, a suite of Runge-Kutta integrators and special-purpose solvers to efficiently integrate 1-, 2- and 3-dimensional partial differential equations are available. The routines solve both stiff and non-stiff systems, and include many options, e.g., to deal efficiently with the sparsity of the Jacobian matrix or to find the roots of equations. In this article, our objectives are threefold: (1) to demonstrate the potential of using R for dynamic modeling, (2) to highlight typical uses of the different methods implemented, and (3) to compare the performance of models specified in R code and in compiled code for a number of test cases. These comparisons demonstrate that, if the use of loops is avoided, R code can efficiently integrate problems comprising several thousands of state variables. Nevertheless, the same problem may be solved from 2 to more than 50 times faster by using compiled code compared to an implementation using only R code. Still, amongst the benefits of R are a more flexible and interactive implementation, better readability of the code, and access to R's high-level procedures.
deSolve is the successor of package odesolve which will be deprecated in the future; it is free software and distributed under the GNU General Public License, as part of the R software project.
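    deSolve itself is an R package; as a rough Python analogue (an illustration, not part of the paper), SciPy's solve_ivp exposes the same ODEPACK LSODA routine, which switches automatically between stiff and non-stiff methods:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra predator-prey model, a standard IVP test case
# (parameter values here are illustrative, not taken from the paper).
def lotka_volterra(t, y, a=1.5, b=1.0, c=3.0, d=1.0):
    prey, pred = y
    return [a * prey - b * prey * pred, -c * pred + d * prey * pred]

# method="LSODA" selects the same ODEPACK routine that deSolve's
# lsoda() wraps; it auto-switches between stiff and non-stiff solvers.
sol = solve_ivp(lotka_volterra, t_span=(0, 15), y0=[10.0, 5.0],
                method="LSODA", dense_output=True)
```

    After integration, sol.y holds the two state trajectories and sol.sol can be evaluated at arbitrary times via the dense output.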

    Next-Purchase Prediction Using Projections of Discounted Purchasing Sequences

    A primary task of customer relationship management (CRM) is the transformation of customer data into business value related to customer binding and development, for instance, by offering additional products that meet customers’ needs. A customer’s purchasing history (or sequence) is a promising feature to better anticipate customer needs, such as the next purchase intention. To operationalize this feature, sequences need to be aggregated before applying supervised prediction. That is because numerous sequences might exist with little support (number of observations) per unique sequence, discouraging inferences from past observations at the individual sequence level. In this paper, the authors propose mechanisms to aggregate sequences into generalized purchasing types. The mechanisms group sequences according to their similarity but allow for giving higher weights to more recent purchases. The observed conversion rate per purchasing type can then be used to predict a customer’s probability of a next purchase and target the customers most prone to purchasing a particular product. The bias–variance trade-off when applying the models to target customers with respect to the lift criterion is discussed. The mechanisms are tested on empirical data in the realm of cross-selling campaigns. Results show that the expected bias–variance behavior well predicts the lift achieved with the mechanisms. Results also show a superior performance of the proposed methods compared to commonly used segmentation-based approaches, different similarity measures, and popular class predictors. While the authors tested the approaches for CRM campaigns, their parameterization can be adjusted to operationalize sequential features of high cardinality in other domains or business functions as well.
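    The abstract does not spell out the paper's exact projection; the following sketch (names, items, and the discounting scheme are all illustrative assumptions) shows one way a purchasing sequence can be projected onto a recency-discounted profile vector before grouping sequences by similarity:

```python
import numpy as np

def discounted_profile(sequence, items, gamma=0.8):
    """Project a purchase sequence onto a vector in which more recent
    purchases receive higher weight (discount factor gamma < 1).
    This concrete projection is illustrative, not the paper's."""
    v = np.zeros(len(items))
    T = len(sequence)
    for t, item in enumerate(sequence):
        v[items.index(item)] += gamma ** (T - 1 - t)  # last purchase: weight 1
    return v

items = ["phone", "tariff", "accessory"]
a = discounted_profile(["phone", "accessory", "tariff"], items)
b = discounted_profile(["accessory", "phone", "tariff"], items)
# cosine similarity between the two customers' discounted profiles
sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

    Sequences with high pairwise similarity would then be merged into one purchasing type, whose observed conversion rate serves as the prediction.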

    Applying Optimal Weight Combination in Hybrid Recommender Systems

    We propose a method for learning weighting schemes in weighted hybrid recommender systems (RS) that is based on statistical forecast and portfolio theory. An RS predicts the future preference of a set of items for a user, and recommends the top items. A hybrid RS combines individual RS in making the predictions. To determine the weighting of individual RS, we learn so-called optimal weights from the covariance matrix of available error data of individual RS that minimize the error of a combined RS. We test the method on the well-known MovieLens 1M dataset, and, contrary to the “forecast combination puzzle”, which states that a simple average (SA) weighting typically outperforms learned weights, the out-of-sample results show that the learned weights consistently outperform the individually best RS as well as an SA combination.
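    Optimal weights derived from an error covariance matrix correspond to the classic minimum-variance combination, w* = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch on synthetic data (the paper's exact estimator may differ):

```python
import numpy as np

def optimal_weights(errors):
    """Weights minimizing the variance of the combined error subject to
    summing to one: w* = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
    errors: (n_obs, n_models) matrix of out-of-sample errors."""
    sigma = np.cov(errors, rowvar=False)   # error covariance matrix
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)       # Sigma^{-1} 1
    return w / w.sum()                     # normalize to sum to one

rng = np.random.default_rng(0)
# three correlated error series of unequal quality (synthetic stand-in
# for the out-of-sample errors of three individual recommenders)
base = rng.normal(size=500)
errors = np.column_stack([base + 0.5 * rng.normal(size=500),
                          base + 1.0 * rng.normal(size=500),
                          0.7 * base + 0.8 * rng.normal(size=500)])
w = optimal_weights(errors)
combined_var = np.var(errors @ w)
```

    In-sample, the combined error variance is never larger than that of the best individual model; the paper's point is that the advantage also holds out-of-sample on MovieLens 1M.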

    Hybrid Recommender Systems for Next Purchase Prediction Based on Optimal Combination Weights

    Recommender systems (RS) play a key role in e-commerce by preselecting presumably interesting products for customers. Hybrid RSs using a weighted average of individual RSs’ predictions have been widely adopted for improving accuracy and robustness over individual RSs. While for regression tasks, approaches to estimate optimal weighting schemes based on individual RSs’ out-of-sample errors exist, there is scant literature for classification settings. Class prediction is important for RSs in e-commerce, as here item purchases are to be predicted. We propose a method for estimating weighting schemes to combine classifying RSs based on the variance-covariance structures of the errors of individual models' probability scores. We evaluate the approach on a large real-world e-commerce data set from a European telecommunications provider, where it shows superior accuracy compared to the best individual model as well as a weighting scheme that averages the predictions using equal weights.

    Improving Forecast Accuracy by Guided Manual Overwrite in Forecast Debiasing

    We present ongoing work on a model-driven decision support system (DSS) that is aimed at providing guidance on reflecting and adjusting judgmental forecasts. We consider judgmental forecasts of cash flows generated by local experts in numerous subsidiaries of an international corporation. Forecasts are generated in a decentralized, non-standardized fashion, and corporate managers and controllers then aggregate the forecasts to derive consolidated, corporate-wide plans to manage liquidity and foreign exchange risk. However, it is well known that judgmental predictions are often biased, in which case statistical debiasing techniques can be applied to improve forecast accuracy. Even though debiasing can improve average forecast accuracy, many originally appropriate forecasts may be automatically corrected in the wrong direction, for instance, in cases where a forecaster might have considered knowledge on future events not derivable statistically from past time series. To prevent high-impact erroneous corrections, we propose to prompt a forecaster for action upon submission of a forecast that is out of the confidence bounds of a benchmark forecast. The benchmark forecast is derived from a statistical debiasing model that considers the past error patterns of a forecaster. Bounds correspond to percentiles of the error distribution of the debiased forecast. We discuss the determination of the confidence bounds and the selection of suspicious judgmental forecasts, types of (statistical) feedback to the forecasters, and the incorporation of the forecaster’s reactions (comments, revisions) in future debiasing strategies.
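    The selection rule described, flagging a judgmental forecast that falls outside percentile bounds of the benchmark's error distribution, can be sketched as follows (percentile levels, function names, and data are illustrative assumptions, not the DSS's actual implementation):

```python
import numpy as np

def flag_suspicious(judgmental, benchmark, past_errors,
                    lower_pct=5, upper_pct=95):
    """Flag a judgmental forecast for review when it lies outside
    confidence bounds around the statistical benchmark forecast.
    Bounds are percentiles of the benchmark's historical errors."""
    lo, hi = np.percentile(past_errors, [lower_pct, upper_pct])
    return not (benchmark + lo <= judgmental <= benchmark + hi)

# synthetic stand-in for past errors of the debiased benchmark forecast
rng = np.random.default_rng(1)
errors = rng.normal(0, 10, size=200)

suspicious = flag_suspicious(judgmental=150, benchmark=100, past_errors=errors)
ok = flag_suspicious(judgmental=103, benchmark=100, past_errors=errors)
```

    A flagged forecast would trigger a prompt to the forecaster rather than an automatic correction, preserving knowledge the statistical model cannot see.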

    An Analysis of Design Problems in Combinatorial Procurement Auctions

    Traditional auction mechanisms support price negotiations on a single item. The Internet allows for the exchange of much more complex offers in real time. This is one of the reasons for much research on multidimensional auction mechanisms allowing negotiations on multiple items, multiple units, or multiple attributes of an item, as they can regularly be found in procurement. Combinatorial auctions, for example, enable suppliers to submit bids on bundles of items. A number of laboratory experiments have shown high allocative efficiency in markets with economies of scope. For suppliers it is easier to express cost savings due to bundling (e.g., decreased transportation or production costs). This can lead to significant savings in the procurement manager's total cost. Procurement negotiations exhibit a number of particularities:
    – It is often necessary to consider qualitative attributes or volume discounts in bundle bids. These complex bid types have not been sufficiently analyzed.
    – The winner determination problem requires the consideration of a number of additional business constraints, such as limits on the spend on a particular supplier or on the number of suppliers.
    – Iterative combinatorial auctions have a number of advantages in practical applications, but they also lead to new problems in the determination of ask prices.
    In this paper, we discuss fundamental problems in the design of combinatorial auctions and the particularities of procurement applications. Reprint of an article from WIRTSCHAFTSINFORMATIK 47(2)2005:126–134.

    Feeding-Back Error Patterns to Stimulate Self-Reflection versus Automated Debiasing of Judgments

    Automated debiasing, referring to automatic statistical correction of human estimations, can improve accuracy, whereby benefits are limited by cases where experts derive accurate judgments but are then falsely "corrected". We present ongoing work on a feedback-based decision support system that learns a statistical model for correcting identified error patterns observed on judgments of an expert. The model is then mirrored to the expert as feedback to stimulate self-reflection and selective adjustment of further judgments, instead of being used for auto-debiasing. Our assumption is that experts are capable of incorporating the feedback wisely when making another judgment, reducing overall error levels and mitigating this false-correction problem. To test the assumption, we present the design and results of a pilot experiment. Results indicate that subjects indeed use the feedback wisely and selectively to improve their judgments and overall accuracy.

    On Predictability of Revisioning in Corporate Cash Flow Forecasting

    Financial services within corporations are usually part of an information system on which many business functions depend. Because of the importance of forecast quality for financial services, means of improving forecast accuracy, such as data-driven statistical prediction techniques and forecast support systems, have been subject to IS research for decades. In this paper we consider means of forecast improvement based on regular patterns in forecast revisioning. We analyze how business forecasts are adjusted in order to exploit possible improvements in the accuracy of forecasts with lower lead time. The empirical part is based on a unique dataset of experts' cash flow forecasts and accountants' realized actuals from companies in a global corporation. We find that the direction and magnitude of the final revision in aggregated forecasts can be related to suggested targets in earnings management, providing a means of improving the accuracy of longer-term cash flow forecasts.

    Linear Hybrid Shrinkage of Weights for Forecast Selection and Combination

    Forecast combination is an established methodology to improve forecast accuracy. The primary questions in the current literature are how many and which forecasts to include (selection) and how to weight the selected forecasts (weighting). Although integrating both tasks seems appealing, we are aware of only a few data-analytical models that do so. We introduce Linear Hybrid Shrinkage (LHS), a novel method that uses information criteria from statistical learning theory to select forecasters and then shrinks the selection from their in-sample optimal weights linearly towards equality, while shrinking the non-selected forecasts towards zero. Simulation results show conditions (scenarios) where LHS leads to higher accuracy than LASSO-based Shrinkage, Linear Shrinkage of in-sample optimal weights, and a simple averaging of forecasts.
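    Assuming the selection step (via an information criterion) has already been made, the weighting step described, shrinking selected weights linearly towards equality and non-selected weights towards zero, might look like the following sketch (the shrinkage parameter and the final renormalization are illustrative assumptions, not the paper's exact rule):

```python
import numpy as np

def linear_hybrid_shrinkage(w_opt, selected, lam=0.5):
    """Sketch of the LHS weighting step: selected forecasters'
    in-sample optimal weights are shrunk linearly towards the equal
    weight 1/k (k = number selected); non-selected forecasters get
    weight zero. lam = 1 recovers a simple average of the selection."""
    w_opt = np.asarray(w_opt, dtype=float)
    k = selected.sum()
    w = np.where(selected, (1 - lam) * w_opt + lam / k, 0.0)
    return w / w.sum()  # renormalize so weights sum to one

w_opt = np.array([0.6, 0.3, 0.25, -0.15])       # in-sample optimal weights
selected = np.array([True, True, True, False])  # e.g. chosen by an IC
w = linear_hybrid_shrinkage(w_opt, selected, lam=0.5)
```

    The shrinkage pulls extreme in-sample weights towards the robust equal-weight solution, which is the usual motivation for combining estimated and equal weights.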